Updated on: 2022-07-30
The purpose of this project is to determine the effect of different risk measures on portfolio performance. This project uses the Variance/Standard Deviation, Semi-Variance/Semi-Deviation, Downside Deviation (DD), Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) as the measures of risk for comparison.
The first part introduces the different risk measures using a single security. The second part shows how the different risk measures are used to optimize a portfolio with more than one security over a single period. I first created a category of portfolios that minimize the different risk measures, then created a category of optimal portfolios that minimize the different risk measures for a given level of return. For each category of portfolios, I compared their performance and charted their cumulative returns.
The results in this project should not be taken as investment advice.
library(dplyr) # For data manipulation and the %>% pipe operator
library(doParallel) # For parallel computation in foreach loops
library(PerformanceAnalytics) # For portfolio performance and risk analysis
library(PortfolioAnalytics) # For portfolio optimization and analysis
library(quantmod) # For obtaining historical prices from Yahoo Finance
In this section, the different risk measures are introduced, starting with Variance/Standard Deviation, as it is the fundamental risk measure of the Mean-Variance Framework developed by Harry Markowitz. Subsequently, I describe the Semi-Variance/Semi-Deviation and Downside Deviation, which measure only the downside risk or the negative fluctuations, followed by VaR and CVaR, which measure the tail risk. The PerformanceAnalytics package provides the necessary functions for the different risk measures.
I used the stock returns of Microsoft Corporation (MSFT) to illustrate the different risk measures; the price data can be obtained using the getSymbols function in the quantmod package.
<- "2010-01-01"
startdate <- "2022-05-31"
enddate
<- quantmod::getSymbols(Symbols = "MSFT", # Indicate a symbol that corresponds to stock in Yahoo Finance
MSFT src = "yahoo", # Indicate to search stock prices from Yahoo Finance
from = startdate, # Indicate a start date
to = enddate, # Indicate an end date
periodicity = "monthly", # Indicate the periodicity of prices to retrieve, can also be "daily" or "weekly"
auto.assign = F) # Indicate FALSE so that results are assigned to variable we indicate
head(MSFT, n = 4); tail(MSFT, n = 4)
## MSFT.Open MSFT.High MSFT.Low MSFT.Close MSFT.Volume MSFT.Adjusted
## 2010-01-01 30.62 31.24 27.66 28.18 1359650900 21.67012
## 2010-02-01 28.39 29.03 27.57 28.67 1074643300 22.04692
## 2010-03-01 28.77 30.57 28.24 29.29 1110237200 22.62903
## 2010-04-01 29.35 31.58 28.62 30.54 1319029500 23.59476
## MSFT.Open MSFT.High MSFT.Low MSFT.Close MSFT.Volume MSFT.Adjusted
## 2022-02-01 310.41 315.12 271.52 298.79 697050600 297.4806
## 2022-03-01 296.40 315.95 270.00 308.31 734334200 307.5936
## 2022-04-01 309.37 315.11 270.00 277.52 627343400 276.8751
## 2022-05-01 277.71 290.88 246.44 271.87 742902000 271.2383
colSums(is.na(MSFT)) # To check for NA values
## MSFT.Open MSFT.High MSFT.Low MSFT.Close MSFT.Volume
## 0 0 0 0 0
## MSFT.Adjusted
## 0
The adjusted price is used to calculate the monthly returns since it accounts for dividends and splits.
MSFT_returns <- PerformanceAnalytics::Return.calculate(prices = MSFT[, 6], # Use the 6th column of MSFT (Adjusted Price)
                                                       method = "discrete" # Arithmetic/discrete returns; use "log" for log/continuous returns
                                                       )[-1, ] # Remove the first row since the first price observation cannot be used to calculate a return
head(MSFT_returns, 4)
## MSFT.Adjusted
## 2010-02-01 0.01738823
## 2010-03-01 0.02640300
## 2010-04-01 0.04267665
## 2010-05-01 -0.15520667
The variance measures the average squared deviation from the expected return, which can be written as:
\[ \sigma^2 = \sum^N_{n=1} \frac{(R_n - E(R))^2}{N} ~, \text{where N = total number of observations} \]
The standard deviation is the square root of the variance, or \(\sqrt{\sigma^2}\). We can calculate the standard deviation of returns using the StdDev function.
MSFT_stddev <- PerformanceAnalytics::StdDev(R = MSFT_returns)

MSFT_stddev
## [,1]
## StdDev 0.06172065
The value of 0.062 means that, under a normality assumption, about 68% of monthly returns would fall within one standard deviation of the mean return (mean \(\pm\) 0.062).
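To make this concrete, here is a minimal sketch (using the MSFT_returns and MSFT_stddev objects created above) that computes this one-standard-deviation range:
# Sketch: the range that should contain ~68% of monthly returns under normality
mean(MSFT_returns) + c(-1, 1) * as.numeric(MSFT_stddev)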
However, the variance and standard deviation as measures of risk are typically countered by two arguments. First, they assume that returns are normally distributed, which is usually not the case for financial returns. Second, they penalize downside and upside risks equally, but investors are usually more concerned with downside risk than upside risk. In optimizing portfolio composition, the Mean-Variance Framework limits not just the losses, but also the gains. Therefore, different measures of risk have been developed to capture the left side of the distribution instead.
PerformanceAnalytics::chart.Histogram(MSFT_returns,
                                      main = "Distribution of MSFT Monthly Returns",
                                      methods = c("add.density", "add.normal"))

PerformanceAnalytics::skewness(MSFT_returns, method = "sample")
## [1] 0.07759841
PerformanceAnalytics::kurtosis(MSFT_returns, method = "sample")
## [1] 3.541433
The monthly return distribution of MSFT is slightly positively skewed and has a kurtosis of 3.54, higher than the normal distribution's 3. The majority of monthly returns are concentrated around the mean, as can be seen in the chart, but the tails are fatter, meaning that extremely large monthly losses or gains can occur. Hence, the monthly returns of MSFT may not be normally distributed (this can be confirmed with a formal test, such as the Jarque-Bera or Shapiro-Wilk test).
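As a quick illustration of such a formal test, the Shapiro-Wilk test from base R can be applied directly to the return series (a sketch; a small p-value would support rejecting normality):
# Sketch: Shapiro-Wilk normality test on the monthly returns (base R)
shapiro.test(as.numeric(MSFT_returns))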
The semi-variance and semi-deviation are similar to the variance and standard deviation, except that they only measure deviations below the expected return, i.e., the downside risk. The semi-deviation formula is written as:
\[ \sigma_{semi} = \sqrt{\sum_{n=1}^N \frac{\min(R_n - E(R), 0)^2}{N}} ~, \text{where N = total number of observations} \]
We can calculate the semi-deviation of MSFT monthly returns using the SemiDeviation function.
MSFT_semidev <- PerformanceAnalytics::SemiDeviation(MSFT_returns)

MSFT_semidev
## MSFT.Adjusted
## Semi-Deviation 0.04348707
By using the semi-deviation as a measure of risk, MSFT now has lower risk than if we had used the standard deviation (0.043 < 0.062). However, we cannot use the semi-deviation to determine the range of values that the monthly returns would fall within at a given probability, as we did with the standard deviation.
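For intuition, the semi-deviation can also be computed by hand from the formula above (a sketch; this assumes the function divides by the total number of observations N, as in the formula):
# Sketch: semi-deviation computed manually per the formula above
r <- as.numeric(MSFT_returns)
sqrt(sum(pmin(r - mean(r), 0)^2) / length(r))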
The downside deviation is similar to the semi-deviation, but it measures deviations below a target return (called the Minimum Acceptable Return or MAR). The formula is written as:
\[ DD = \sqrt{\sum_{n=1}^N \frac{\min(R_n - \text{MAR}, 0)^2}{N}} ~, \text{where N = total number of observations} \]
Common values of the MAR include the risk-free rate or zero; if the mean of the returns were used, we would simply be calculating the semi-deviation. Throughout this project, I use an annual MAR of 3%, which is approximately the yearly inflation rate.
MSFT_downdev <- PerformanceAnalytics::DownsideDeviation(MSFT_returns, MAR = 0.03/12) # Divide by 12 to convert the annual MAR to monthly periodicity

MSFT_downdev
## [,1]
## [1,] 0.03510311
Because the MAR is less than the expected MSFT monthly return, we obtain a smaller DD value than the semi-deviation. The DD might be a better measure of risk than the semi-deviation, since the expected return may not be a good choice of MAR.
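A minimal check of this reasoning, using the objects created above (a sketch; both comparisons should hold given the values reported in this section):
# Sketch: the MAR sits below the mean monthly return, so DD < semi-deviation here
mean(MSFT_returns) > 0.03/12
as.numeric(MSFT_downdev) < as.numeric(MSFT_semidev)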
VaR measures the amount of loss that could occur over a specific time period at a given confidence level. For example, a 95% VaR of $1 million over 1 month means that we are 95% confident that the loss would not exceed $1 million; equivalently, there is a 5% chance that the loss exceeds $1 million over the month. The plot shows how VaR is indicated on a distribution of returns:
PerformanceAnalytics::chart.Histogram(MSFT_returns,
                                      main = "Distribution of MSFT Monthly Returns with VaR",
                                      methods = c("add.risk")) # Show VaR and Modified VaR on the chart
There are generally three basic measures of VaR, namely the Historical VaR (HVaR), Parametric VaR (PVaR) and Modified VaR.
For the 95% HVaR, we determine the value of the actual return distribution at the 5% probability level. As it uses the historical returns to calculate the VaR, HVaR does not make any assumptions about the distribution of returns.
MSFT_hvar <- PerformanceAnalytics::VaR(MSFT_returns,
                                       p = 0.95, # Confidence level of 95%
                                       method = "historical") # Use historical VaR

MSFT_hvar
## MSFT.Adjusted
## VaR -0.07941816
Based on the HVaR, we are 95% confident that the monthly loss would not exceed 0.079 (or 7.9%). The statements "We are 95% confident that the MSFT monthly return will be at least -0.079" and "There is a 5% chance that the MSFT monthly loss would exceed 0.079" are equivalent.
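Since the historical VaR is essentially an empirical quantile, a quick cross-check is possible (a sketch; the exact figure may differ slightly depending on the quantile interpolation method used):
# Sketch: the 95% historical VaR is roughly the 5% empirical quantile of returns
quantile(as.numeric(MSFT_returns), probs = 0.05)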
For the 95% PVaR, we also determine the value of the return distribution at the 5% probability level, but we assume that the returns are normally distributed. PVaR is also known as the variance-covariance VaR.
MSFT_pvar <- PerformanceAnalytics::VaR(MSFT_returns,
                                       p = 0.95,
                                       method = "gaussian") # Gaussian (normal) distribution

MSFT_pvar
## MSFT.Adjusted
## VaR -0.08209595
Based on the PVaR, we are 95% confident that the monthly loss in MSFT would not exceed 0.082.
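The Gaussian PVaR can be reproduced by hand from the sample mean and standard deviation (a sketch, assuming the textbook normal-quantile formula; the package's internal estimates may differ slightly):
# Sketch: parametric VaR = mu + z(0.05) * sigma under a normality assumption
r <- as.numeric(MSFT_returns)
mean(r) + qnorm(0.05) * sd(r)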
Modified VaR (also called Cornish-Fisher VaR) accounts for the skewness and kurtosis of the return distribution, which should be preferred when returns are not normally distributed.
MSFT_modvar <- PerformanceAnalytics::VaR(MSFT_returns,
                                         p = 0.95,
                                         method = "modified") # Use modified (Cornish-Fisher) VaR

MSFT_modvar
## MSFT.Adjusted
## VaR -0.08023484
Based on the Modified VaR, we are 95% confident that the monthly loss in MSFT would not exceed 0.080.
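For reference, here is a sketch of the Cornish-Fisher expansion that underlies the modified VaR, assuming moment-based skewness and excess kurtosis as inputs (the package's internal estimator may differ slightly):
# Sketch: Cornish-Fisher adjusted quantile from sample skewness and excess kurtosis
z <- qnorm(0.05)
s <- PerformanceAnalytics::skewness(MSFT_returns, method = "moment")
k <- PerformanceAnalytics::kurtosis(MSFT_returns, method = "excess")
z_cf <- z + (z^2 - 1) * s / 6 + (z^3 - 3 * z) * k / 24 - (2 * z^3 - 5 * z) * s^2 / 36
r <- as.numeric(MSFT_returns)
mean(r) + z_cf * sd(r) # Should be close to the modified VaR reported above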
While VaR allows us to determine the amount of loss that could occur within a time period at a given confidence level, it does not quantify the loss that could occur beyond the VaR.
CVaR, also known as the expected shortfall (ES) or expected tail loss (ETL), measures the loss occurring beyond the VaR. Similar to VaR, the calculation of CVaR can be based on the historical, parametric, or modified methods. In the PerformanceAnalytics package, the functions CVaR, ES and ETL are equivalent. For a 95% CVaR, we calculate the average of the 5% tail loss.
MSFT_histcvar <- PerformanceAnalytics::CVaR(MSFT_returns,
                                            p = 0.95,
                                            method = "historical")

MSFT_histcvar
## MSFT.Adjusted
## ES -0.1057026
Based on the historical method of CVaR, the average loss in the 5% tail of the return distribution is 0.106 or 10.6%.
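Conceptually, the historical CVaR is just the average of the returns at or below the historical VaR, which we can verify directly (a sketch using the MSFT_hvar object from above):
# Sketch: average of the observations in the 5% tail (at or below the HVaR)
r <- as.numeric(MSFT_returns)
mean(r[r <= as.numeric(MSFT_hvar)])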
MSFT_parcvar <- PerformanceAnalytics::CVaR(MSFT_returns,
                                           p = 0.95,
                                           method = "gaussian")

MSFT_parcvar
## MSFT.Adjusted
## ES -0.1077992
Based on the parametric CVaR, the average loss in the 5% tail of the return distribution is 0.108.
MSFT_modcvar <- PerformanceAnalytics::CVaR(MSFT_returns,
                                           p = 0.95,
                                           method = "modified")

MSFT_modcvar
## MSFT.Adjusted
## ES -0.1082884
Based on the modified CVaR, the average loss in the 5% tail of the return distribution is 0.108.
In this section, we have seen the different variations of risk measures: the standard deviation, semi-deviation and downside deviation measure fluctuations of returns around or below a specified value, while VaR and CVaR measure the tail losses.
Here are the values of the risk measures calculated in this section:
c(Std_Dev = MSFT_stddev, Semi_Dev = MSFT_semidev, Downside_Dev = MSFT_downdev)
## Std_Dev Semi_Dev Downside_Dev
## 0.06172065 0.04348707 0.03510311
c(H_VaR = MSFT_hvar, P_VaR = MSFT_pvar, Mod_VaR = MSFT_modvar,
H_CVaR = MSFT_histcvar, P_CVaR = MSFT_parcvar, Mod_CVaR = MSFT_modcvar)
## H_VaR P_VaR Mod_VaR H_CVaR P_CVaR Mod_CVaR
## -0.07941816 -0.08209595 -0.08023484 -0.10570261 -0.10779921 -0.10828844
For the VaR and CVaR calculations, it is assumed that historical values can determine future patterns of returns, since we are using actual historical data. Hence, the viability of VaR and CVaR on future data and events has been questioned.
In this section, we start applying the different risk measures to the portfolio optimization problem. I have randomly chosen a few large-cap stocks listed in the U.S. for the portfolio optimization. I retrieved the historical price data of each stock, keeping only the adjusted price column, and used the same start and end dates for data retrieval as in Section 3.
<- c("AAPL", "MSFT", "JNJ", "PG", "MA", "HD", "PFE", "MRK", "COST", "NVDA")
tickers
<- NULL
price_data
for (ticker in tickers) {
<- cbind(price_data,
price_data ::getSymbols(ticker,
quantmodsrc = "yahoo",
from = startdate, to = enddate,
periodicity = "monthly",
auto.assign = F)[,6])
}
colnames(price_data) <- tickers
Let us take a look at the price_data object we have created to make sure that the data is correctly retrieved.
head(price_data, n = 4); tail(price_data, n = 4)
## AAPL MSFT JNJ PG MA HD PFE
## 2010-01-01 5.864814 21.67013 43.60136 42.02736 23.42509 20.89700 11.06335
## 2010-02-01 6.248349 22.04693 43.69846 43.51885 21.04429 23.27692 10.40524
## 2010-03-01 7.176042 22.62903 45.57046 43.51199 23.82336 24.13487 10.26411
## 2010-04-01 7.972737 23.59476 44.94144 42.74863 23.26436 26.47905 10.00676
## MRK COST NVDA
## 2010-01-01 23.64319 42.54160 3.533004
## 2010-02-01 22.83815 45.16387 3.718953
## 2010-03-01 23.12920 44.36643 3.994431
## 2010-04-01 21.92594 43.89832 3.606466
## AAPL MSFT JNJ PG MA HD PFE
## 2022-02-01 164.6680 297.4806 162.4763 154.0204 359.7685 311.9652 46.20180
## 2022-03-01 174.3538 307.5936 176.0984 150.9675 356.3385 295.6671 50.95584
## 2022-04-01 157.4187 276.8751 179.3078 158.6245 362.3210 298.5147 48.29830
## 2022-05-01 148.6216 271.2383 178.3837 146.9262 357.3223 300.8500 52.20587
## MRK COST NVDA
## 2022-02-01 75.28864 516.7301 243.7569
## 2022-03-01 80.66640 573.9240 272.7559
## 2022-04-01 87.97005 529.9416 185.4308
## 2022-05-01 91.28293 465.4167 186.6805
nrow(price_data); colSums(is.na(price_data))
## [1] 149
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0 0 0 0 0 0 0 0 0 0
We have a total of 149 monthly observations per security and there are no NA values in the columns. Now that we have ensured the price data is in time order and there are no missing values, we can calculate the monthly returns of each security.
returns <- na.omit(PerformanceAnalytics::Return.calculate(price_data, method = "discrete"))
head(returns); nrow(returns)
## AAPL MSFT JNJ PG MA
## 2010-02-01 0.06539594 0.01738799 0.002227133 0.0354887192 -0.10163486
## 2010-03-01 0.14847010 0.02640295 0.042839081 -0.0001576099 0.13205850
## 2010-04-01 0.11102151 0.04267637 -0.013803414 -0.0175436460 -0.02346444
## 2010-05-01 -0.01612470 -0.15520601 -0.093312726 -0.0096502733 -0.18607091
## 2010-06-01 -0.02082691 -0.10411525 0.022289138 -0.0181701441 -0.01110150
## 2010-07-01 0.02274083 0.12168612 -0.016424164 0.0196737568 0.05267390
## HD PFE MRK COST NVDA
## 2010-02-01 0.11388781 -0.05948579 -0.034049385 0.06164031 0.05263198
## 2010-03-01 0.03685870 -0.01356336 0.012744026 -0.01765659 0.07407407
## 2010-04-01 0.09712808 -0.02507281 -0.052023498 -0.01055097 -0.09712647
## 2010-05-01 -0.03888746 -0.08911489 -0.038527698 -0.01404887 -0.16359006
## 2010-06-01 -0.17099811 -0.05382235 0.037993549 -0.05547889 -0.22298312
## 2010-07-01 0.02280378 0.05189280 -0.003657048 0.03428774 -0.09990277
## [1] 148
With the returns data, we can proceed with optimizing the portfolios based on the different risk measures.
Before optimizing portfolios, I created an initial portfolio specification with common constraints. It includes a sum-of-weights constraint and individual asset weight constraints, which are used throughout the different portfolio optimization problems.
init_spec <- PortfolioAnalytics::portfolio.spec(assets = colnames(returns))

# Sum of weights constrained to 1, with some "wiggle" room for random portfolio generation
init_spec <- add.constraint(portfolio = init_spec,
                            type = "weight_sum",
                            min_sum = 0.99, max_sum = 1.01)

# Weight of each asset constrained to a minimum of 0 and a maximum of 20% of the portfolio
init_spec <- add.constraint(portfolio = init_spec,
                            type = "box",
                            min = 0, max = 0.20)
While the ROI solver can be used with var/StdDev and ETL/ES/CVaR risk objectives, it cannot be used with VaR and other downside risk measures. As such, I have opted to use the random portfolios method for optimization. Generating random portfolios that satisfy the portfolio constraints and objectives may be a problem if a small number of permutations is chosen, and may not allow for a satisfactory comparison of performance; running a very large number of permutations, however, requires time and computing power.
Create random portfolios satisfying the above constraints:
set.seed(1)

randport <- random_portfolios(portfolio = init_spec,
                              permutations = 25000,
                              rp_method = "sample",
                              eliminate = TRUE) # Eliminate portfolios that do not meet the constraints
dim(randport) # 24999 random portfolios found
## [1] 24999 10
With the random portfolios, I created a foreach loop to find the monthly returns of each random portfolio in randport. To speed things up, I ran the foreach loop in parallel using %dopar%.
Calculate the portfolio returns of every weight composition in randport:
# Important not to use all cores for computation; check the number of physical cores with detectCores(logical = FALSE)
mycluster <- makeCluster(6) # Number of cores to use. I have 8, but this may not be suitable for everyone.

registerDoParallel(mycluster) # Register parallel computing

total_time <- system.time( # system.time measures the time taken to run the loop
  output1 <- foreach(i = 1:nrow(randport), .combine = "cbind") %dopar% {
    PerformanceAnalytics::Return.portfolio(R = returns,
                                           weights = randport[i, ],
                                           geometric = TRUE,
                                           rebalance_on = "quarters")
  }
)

stopCluster(mycluster) # End parallel computing

total_time
total_time
## user system elapsed
## 36.57 2.53 390.06
dim(output1)
## [1] 148 24999
We can see that there are 148 rows and 24,999 columns, corresponding to the number of observations in returns and the number of random portfolios in randport respectively. total_time shows the time taken to calculate the 24,999 random portfolio returns; the elapsed time (in seconds) gives us an idea of the amount of time needed to calculate the returns of all random portfolios.
In this section, I start by creating the minimum variance portfolio, followed by portfolios with minimum semi-variance, downside deviation, VaR and CVaR. The creation of these portfolios requires the PortfolioAnalytics package.
With the monthly returns of every random portfolio in output1, we can calculate the standard deviation of each portfolio and find the minimum to obtain the minimum variance portfolio.
Optimize portfolio based on minimum variance:
stddev_search <- PerformanceAnalytics::StdDev(R = output1, portfolio_method = "single")

min(stddev_search)
## [1] 0.0338599
weight_MinVar <- randport[which.min(stddev_search), ]

weight_MinVar
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0.002 0.128 0.150 0.160 0.078 0.006 0.124 0.180 0.156 0.006
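As a quick sanity check, the selected weights should respect the constraints set in init_spec (a sketch):
# Sketch: the weight sum should lie within [0.99, 1.01] and each weight within [0, 0.20]
sum(weight_MinVar); range(weight_MinVar)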
Optimize portfolio based on minimum semi-deviation:
semidev_search <- PerformanceAnalytics::SemiDeviation(R = output1)

min(semidev_search)
## [1] 0.02370193
weight_MinSemiDev <- randport[which.min(semidev_search), ]

weight_MinSemiDev
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0.004 0.096 0.126 0.196 0.062 0.086 0.086 0.200 0.132 0.008
Optimize portfolio based on minimum downside deviation:
downdev_search <- PerformanceAnalytics::DownsideDeviation(R = output1, MAR = 0.03/12)

min(downdev_search)
## [1] 0.01780422
weight_MinDD <- randport[which.min(downdev_search), ]

weight_MinDD
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0.066 0.076 0.058 0.178 0.074 0.090 0.034 0.198 0.200 0.016
For VaR, I use the historical VaR, although one could experiment with the other VaR methods. Since the VaR measure is reported as a negative number, we want to find the highest value (i.e., the least negative, closest to zero). If VaR were framed as a positive measure, we would simply minimize it.
Optimize portfolio based on minimum historical VaR:
hvar_search <- PerformanceAnalytics::VaR(R = output1, p = 0.95, method = "historical")

max(hvar_search)
## [1] -0.0348743
weight_MinHVaR <- randport[which.max(hvar_search), ]

weight_MinHVaR
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0.006 0.090 0.184 0.088 0.128 0.016 0.104 0.176 0.158 0.042
Similar to the VaR portfolio optimization, I use the historical CVaR.
Optimize portfolio based on minimum historical CVaR:
hcvar_search <- PerformanceAnalytics::CVaR(R = output1, p = 0.95, method = "historical")

max(hcvar_search)
## [1] -0.0519599
weight_MinHCVaR <- randport[which.max(hcvar_search), ]

weight_MinHCVaR
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0.036 0.022 0.036 0.186 0.060 0.084 0.108 0.190 0.198 0.080
Now that we have created the different portfolios by minimizing different risk measures, we can compare their component weights and performance, and chart their cumulative returns.
port_weights <- rbind(Min_Var = weight_MinVar,
                      Min_SemiDev = weight_MinSemiDev,
                      Min_DownDev = weight_MinDD,
                      Min_HVaR = weight_MinHVaR,
                      Min_HCVaR = weight_MinHCVaR)

port_weights
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## Min_Var 0.002 0.128 0.150 0.160 0.078 0.006 0.124 0.180 0.156 0.006
## Min_SemiDev 0.004 0.096 0.126 0.196 0.062 0.086 0.086 0.200 0.132 0.008
## Min_DownDev 0.066 0.076 0.058 0.178 0.074 0.090 0.034 0.198 0.200 0.016
## Min_HVaR 0.006 0.090 0.184 0.088 0.128 0.016 0.104 0.176 0.158 0.042
## Min_HCVaR 0.036 0.022 0.036 0.186 0.060 0.084 0.108 0.190 0.198 0.080
Since we have calculated the monthly returns of each portfolio in output1, we can extract them and save them into a suitable data frame.
return_comp <- cbind(output1[, which.min(stddev_search)],
                     output1[, which.min(semidev_search)],
                     output1[, which.min(downdev_search)],
                     output1[, which.max(hvar_search)],
                     output1[, which.max(hcvar_search)]) %>%
  `colnames<-`(c("Min_Var", "Min_SemiDev", "Min_DownDev", "Min_HVaR", "Min_HCVaR"))
head(return_comp)
## Min_Var Min_SemiDev Min_DownDev Min_HVaR Min_HCVaR
## 2010-02-01 -0.002448919 0.009292151 0.019218587 -0.005926464 0.01640784
## 2010-03-01 0.017890211 0.018815140 0.026411373 0.028416111 0.02150238
## 2010-04-01 -0.015142304 -0.008469249 -0.001164731 -0.018536140 -0.01452503
## 2010-05-01 -0.071991867 -0.062316198 -0.053183300 -0.081758807 -0.05546422
## 2010-06-01 -0.024662608 -0.033965154 -0.037676917 -0.027510809 -0.04711299
## 2010-07-01 0.031076777 0.026244480 0.025547939 0.022879317 0.01534610
An appropriate benchmark is needed if we wish to properly compare the performance of our portfolios. I use the SPDR S&P 500 ETF (SPY), which tracks the S&P 500 large-cap index.
<- quantmod::getSymbols(Symbols = "SPY",
benchmark src = "yahoo",
from = startdate, to = enddate,
periodicity = "monthly",
auto.assign = F)[,6] %>%
`colnames<-`("SPY")
<- na.omit(PerformanceAnalytics::Return.calculate(prices = benchmark, method = "discrete"))
benchmark_return
head(benchmark_return); dim(benchmark_return)
## SPY
## 2010-02-01 0.03119460
## 2010-03-01 0.05652915
## 2010-04-01 0.01965168
## 2010-05-01 -0.07945458
## 2010-06-01 -0.05623128
## 2010-07-01 0.07338330
## [1] 148 1
After compiling the returns, we can plot their cumulative returns using chart.CumReturns and compare their drawdowns using chart.Drawdown.
PerformanceAnalytics::chart.CumReturns(R = cbind(return_comp, benchmark_return),
                                       geometric = TRUE, # Use geometric chaining for portfolio cumulative returns over time
                                       main = "Cumulative Returns of Minimum Risk Portfolios",
                                       plot.engine = "plotly")

PerformanceAnalytics::chart.Drawdown(R = cbind(return_comp, benchmark_return),
                                     geometric = TRUE,
                                     main = "Drawdown of Minimum Risk Portfolios",
                                     plot.engine = "plotly")
The minimum risk portfolios all had better cumulative returns than the SPY benchmark. In descending order of cumulative returns based on the chart: Min_HCVaR > Min_DownDev > Min_HVaR > Min_SemiDev > Min_Var. In general, the SPY benchmark has larger drawdowns than the minimum risk portfolios.
table.AnnualizedReturns(R = cbind(return_comp, benchmark_return), scale = 12, Rf = 0.02/12, geometric = T, digits = 4)
table.Stats(R = cbind(return_comp, benchmark_return), digits = 4)
table.CAPM(Ra = return_comp, Rb = benchmark_return, scale = 12, Rf = 0.02/12, digits = 4)
table.DownsideRisk(R = cbind(return_comp, benchmark_return), scale = 12, Rf = 0.02/12, MAR = 0.03/12, digits = 4)
In this section, I optimized portfolios based on a target return of 20% annually. Since we generated random portfolios in Section 5, I used those to determine the subset of portfolios that meets the target return requirement. The reason for doing this is that random portfolios generated with the random_portfolios function do not account for a return constraint.
From the output1 object in Section 5, which contains the monthly returns of all the random portfolios in randport, I subset the portfolios that have a mean annual return of approximately 20% (with some wiggle room instead of a hard target: 19.9% to 20.1%), then find the portfolios that minimize the different risk measures.
monthly_meanret <- PerformanceAnalytics::Mean.arithmetic(x = output1)

target_return <- monthly_meanret > 0.199/12 & monthly_meanret < 0.201/12 # Annual bounds divided by 12 to match monthly returns

subrp_return <- output1[, target_return == "TRUE"]
dim(subrp_return)
## [1] 148 796
Out of 24,999 random portfolios generated in Section 5, only 796 portfolios meet the target return constraint.
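The count of qualifying portfolios can also be read straight off the logical vector (a sketch, assuming the target_return object from above):
# Sketch: number of random portfolios meeting the target return band
sum(target_return)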
sub_randport <- randport[target_return == "TRUE", ]
Optimize portfolio based on mean-variance:
meanvar_search <- PerformanceAnalytics::StdDev(R = subrp_return, portfolio_method = "single")

min(meanvar_search)
## [1] 0.03657894
weight_MeanVar <- sub_randport[which.min(meanvar_search), ]

weight_MeanVar
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0.078 0.064 0.066 0.158 0.140 0.134 0.042 0.112 0.172 0.030
Optimize portfolio based on mean-semi deviation:
meansemidev_search <- PerformanceAnalytics::SemiDeviation(R = subrp_return)

min(meansemidev_search)
## [1] 0.02599847
weight_MeanSemiDev <- sub_randport[which.min(meansemidev_search), ]

weight_MeanSemiDev
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0.080 0.050 0.074 0.190 0.146 0.044 0.014 0.170 0.162 0.074
Optimize portfolio based on mean-downside deviation:
meandowndev_search <- PerformanceAnalytics::DownsideDeviation(R = subrp_return, MAR = 0.03/12)

min(meandowndev_search)
## [1] 0.01869752
weight_MeanDownDev <- sub_randport[which.min(meandowndev_search), ]

weight_MeanDownDev
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0.080 0.050 0.074 0.190 0.146 0.044 0.014 0.170 0.162 0.074
Optimize portfolio based on mean-VaR:
meanVaR_search <- PerformanceAnalytics::VaR(R = subrp_return, p = 0.95, method = "historical")

max(meanVaR_search)
## [1] -0.03913748
weight_MeanVaR <- sub_randport[which.max(meanVaR_search), ]

weight_MeanVaR
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0.012 0.132 0.088 0.198 0.156 0.004 0.000 0.190 0.098 0.118
Optimize portfolio based on mean-CVaR:
meanCVaR_search <- PerformanceAnalytics::CVaR(R = subrp_return, p = 0.95, method = "historical")

max(meanCVaR_search)
## [1] -0.05385521
weight_MeanCVaR <- sub_randport[which.max(meanCVaR_search), ]

weight_MeanCVaR
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## 0.180 0.066 0.020 0.176 0.032 0.046 0.112 0.140 0.178 0.042
With the different optimal portfolios created, we can now compare their weights and performance, and chart their cumulative returns.
weight_comp <- rbind(Mean_Var = weight_MeanVar,
                     Mean_SemiDev = weight_MeanSemiDev,
                     Mean_DownDev = weight_MeanDownDev,
                     Mean_VaR = weight_MeanVaR,
                     Mean_CVaR = weight_MeanCVaR)

weight_comp
## AAPL MSFT JNJ PG MA HD PFE MRK COST NVDA
## Mean_Var 0.078 0.064 0.066 0.158 0.140 0.134 0.042 0.112 0.172 0.030
## Mean_SemiDev 0.080 0.050 0.074 0.190 0.146 0.044 0.014 0.170 0.162 0.074
## Mean_DownDev 0.080 0.050 0.074 0.190 0.146 0.044 0.014 0.170 0.162 0.074
## Mean_VaR 0.012 0.132 0.088 0.198 0.156 0.004 0.000 0.190 0.098 0.118
## Mean_CVaR 0.180 0.066 0.020 0.176 0.032 0.046 0.112 0.140 0.178 0.042
Again, based on the set of random portfolios generated, the Mean_SemiDev and Mean_DownDev portfolios have the same asset weights.
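We can confirm this by checking that both searches selected the same random portfolio (a sketch using the search objects from above):
# Sketch: both risk measures are minimized by the same column of subrp_return
which.min(meansemidev_search) == which.min(meandowndev_search)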
We can extract the monthly returns of the selected portfolios from the subrp_return object.
return_comp <- cbind(subrp_return[, which.min(meanvar_search)],
                     subrp_return[, which.min(meansemidev_search)],
                     subrp_return[, which.min(meandowndev_search)],
                     subrp_return[, which.max(meanVaR_search)],
                     subrp_return[, which.max(meanCVaR_search)]) %>%
  `colnames<-`(c("Mean_Var", "Mean_SemiDev", "Mean_DownDev", "Mean_VaR", "Mean_CVaR"))
head(return_comp)
## Mean_Var Mean_SemiDev Mean_DownDev Mean_VaR Mean_CVaR
## 2010-02-01 0.018869168 0.01044041 0.01044041 0.0006851744 0.022949153
## 2010-03-01 0.038212734 0.04049569 0.04049569 0.0375384007 0.036163156
## 2010-04-01 0.005829933 -0.01058301 -0.01058301 -0.0233741280 0.007104962
## 2010-05-01 -0.064666350 -0.06786736 -0.06786736 -0.0878524201 -0.048369691
## 2010-06-01 -0.049968662 -0.03634933 -0.03634933 -0.0390847274 -0.040329682
## 2010-07-01 0.026686524 0.01738656 0.01738656 0.0179707117 0.025200578
PerformanceAnalytics::chart.CumReturns(R = cbind(return_comp, benchmark_return),
                                       geometric = TRUE, # Use geometric chaining for portfolio cumulative returns over time
                                       main = "Cumulative Returns of Optimal Portfolios",
                                       plot.engine = "plotly")

PerformanceAnalytics::chart.Drawdown(R = cbind(return_comp, benchmark_return),
                                     geometric = TRUE,
                                     main = "Drawdown of Optimal Portfolios",
                                     plot.engine = "plotly")
With the additional constraint of a 20% target return, the optimal portfolios have similar cumulative returns, which is expected given the shared return target. The SPY benchmark still has larger drawdowns than the optimal portfolios in general.
Below are the common statistics and performance metrics that can be compared to the SPY benchmark.
table.AnnualizedReturns(R = cbind(return_comp, benchmark_return), scale = 12, Rf = 0.02/12, geometric = T, digits = 4)
table.Stats(R = cbind(return_comp, benchmark_return), digits = 4)
table.CAPM(Ra = return_comp, Rb = benchmark_return, scale = 12, Rf = 0.02/12, digits = 4)
table.DownsideRisk(R = cbind(return_comp, benchmark_return), scale = 12, Rf = 0.02/12, MAR = 0.03/12, digits = 4)
While the variance and standard deviation have commonly been used as measures of risk, other downside risk measures may be better suited to quantify the risks that investors face. In Section 5, we saw that the other minimum risk portfolios had higher cumulative returns than the minimum variance portfolio, but had drawdowns that were generally between those of the minimum variance portfolio and the SPY benchmark. In Section 6, the mean-variance portfolio had similar cumulative returns and drawdowns compared to the other optimal portfolios of different risk measures. This might be due to the limited number of random portfolio permutations generated, which stems from the limited computational power available and the time I was willing to wait for the random portfolios to be created.
It is important to note that the results are only applicable to the random portfolios I generated, which can be replicated via the set.seed call used when creating the random portfolios. By increasing the number of permutations, one would likely find a different set of portfolios that minimizes the different risk measures.